2.3 Linear Maps

In this section, we will discuss linear maps between vector spaces. A linear map is a function that preserves the operations of vector addition and scalar multiplication.

Introduction

Following Hassani, we will begin by reviewing maps in general.

Some examples of maps are:

  • The map defined by .
  • The map defined by .
  • The map defined by , where and are real-valued functions.
  • The map $R_\theta: \mathbb{R}^2 \to \mathbb{R}^2$ defined by $R_\theta(x, y) = (x\cos\theta - y\sin\theta,\; x\sin\theta + y\cos\theta)$, which represents a rotation by an angle $\theta$.
  • The map $\mathbf{r}: \mathbb{R} \to \mathbb{R}^3$ defined by $\mathbf{r}(t) = (x(t), y(t), z(t))$, which represents a curve in 3D space parameterized by $t$. We often use this to represent the trajectory of a particle in space over time.
  • The map $T_{\mathbf{a}}$ defined by $T_{\mathbf{a}}(\mathbf{v}) = \mathbf{v} + \mathbf{a}$, which represents a translation by the vector $\mathbf{a}$.

Definition 2.3.2

Let $V$ and $W$ be vector spaces over the same field $\mathbb{F}$, which we will take to be $\mathbb{C}$. A linear map (or linear transformation) is a map $T: V \to W$ that satisfies, for all $u, v \in V$ and all scalars $a, b \in \mathbb{F}$,

$$T(a u + b v) = a\,T(u) + b\,T(v).$$

The set of all linear maps $T: V \to W$ is denoted by $\mathcal{L}(V, W)$.

If the domain and codomain are the same, i.e. $W = V$, then we call $T$ a linear operator or endomorphism on $V$. When we apply a linear map to a vector, we often drop the bracket and write $T(v)$ as $Tv$. The set of all endomorphisms on $V$ is denoted by $\mathcal{L}(V)$ or $\operatorname{End}(V)$.
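
To make the definition concrete, here is a minimal numerical sketch (not from Hassani; the matrix and the vectors are arbitrary choices) checking that the map $v \mapsto A v$ satisfies the defining property:

```python
import numpy as np

rng = np.random.default_rng(42)

A = rng.normal(size=(2, 3))      # an arbitrary matrix: T(v) = A v maps R^3 to R^2
T = lambda v: A @ v

u, v = rng.normal(size=3), rng.normal(size=3)
a, b = 1.7, -0.4                 # arbitrary scalars

# T(a u + b v) = a T(u) + b T(v): the map preserves addition and scaling.
print(np.allclose(T(a * u + b * v), a * T(u) + b * T(v)))   # True
```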

$\mathcal{L}(V, W)$ is a vector space, and so is $\operatorname{End}(V)$.

In other words, we can add linear maps and multiply them by scalars, and the result is still a linear map. Let's define these operations.

First, the zero map $0: V \to W$ is defined by $0(v) = 0_W$ for all $v \in V$, where $0_W$ is the zero vector in $W$. This map sends every vector in $V$ to the zero vector in $W$.

Next, the sum $T + S$ of two linear maps $T, S \in \mathcal{L}(V, W)$ is defined by

$$(T + S)(v) = T(v) + S(v).$$

Lastly, the scalar multiplication of a linear map $T$ by a scalar $a \in \mathbb{F}$ is defined by

$$(a T)(v) = a\,T(v).$$
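
These pointwise operations can be checked numerically. In the sketch below (an illustration of ours, with matrices standing in for the maps), the sum and scalar multiple of two maps agree with the maps defined by the matrix sum and the scaled matrix, so they are again linear:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=(2, 3)), rng.normal(size=(2, 3))   # two maps from R^3 to R^2
T = lambda v: A @ v
S = lambda v: B @ v

# Pointwise sum and scalar multiple of the maps:
T_plus_S = lambda v: T(v) + S(v)
two_T    = lambda v: 2.0 * T(v)

v = rng.normal(size=3)
# They coincide with the maps given by the matrices A + B and 2 A,
# so the sum and scalar multiple of linear maps are again linear maps.
print(np.allclose(T_plus_S(v), (A + B) @ v))   # True
print(np.allclose(two_T(v), (2.0 * A) @ v))    # True
```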

A linear map $T: V \to W$ between inner product spaces is called isometric if it preserves the inner product, i.e. for all $u, v \in V$,

$$\langle T u, T v \rangle = \langle u, v \rangle.$$

In other words, an isometric map preserves the lengths of vectors and the angles between them. If $W = V$, then an isometric map is called an isometry or unitary operator.
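
As a quick sketch of the isometric condition (our own example), a planar rotation preserves the standard inner product on $\mathbb{R}^2$, while a generic stretch does not:

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation by theta

rng = np.random.default_rng(1)
u, v = rng.normal(size=2), rng.normal(size=2)

# <Ru, Rv> = <u, v>: the rotation preserves the inner product (it is an isometry).
print(np.allclose((R @ u) @ (R @ v), u @ v))      # True

M = np.diag([2.0, 1.0])                           # a stretch: linear but not isometric
print(np.allclose((M @ u) @ (M @ v), u @ v))      # False (generically)
```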

Some examples of linear maps are:

  • For any one-dimensional vector space, the only linear maps are the zero map and scalar multiplication by a constant; $T(v) = \alpha v$ for some $\alpha \in \mathbb{C}$. If $T$ is an isometry, then $\alpha$ must be a complex number with unit magnitude, i.e. $\alpha = e^{i\theta}$ for some $\theta \in \mathbb{R}$.

  • Let $p$ be a polynomial in $\mathbb{C}[t]$ (the space of complex polynomials in $t$). The map $D$ defined by $D p = \dfrac{dp}{dt}$ is a linear map, since differentiation is a linear operation. However, it is not an isometry, since it does not preserve the inner product. Similarly, the map $M$ defined by $(M p)(t) = t\,p(t)$ is also a linear map, but not an isometry.

    We can write both maps in terms of the basis $\{1, t, t^2, \ldots\}$ (the monomials). $p$ can be expressed as $p(t) = \sum_k a_k t^k$ for some coefficients $a_k \in \mathbb{C}$. Then, we have

    $$D p(t) = \sum_k k\, a_k\, t^{k-1}, \qquad (M p)(t) = \sum_k a_k\, t^{k+1}.$$

    (A numerical sketch of these maps in the monomial basis follows this list.)

  • Rotations and reflections in $\mathbb{R}^2$ and $\mathbb{R}^3$ are isometries, since they preserve lengths and angles. For example, the rotation map $R_\theta$ defined by $R_\theta(x, y) = (x\cos\theta - y\sin\theta,\; x\sin\theta + y\cos\theta)$ is an isometry.

  • In Minkowski space, the set of all isometries is called the Poincaré group. This group includes translations, rotations, and boosts (changes in velocity). The Poincaré group is important in special relativity, as it describes the symmetries of spacetime.
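
Here is the numerical sketch promised above for the derivative and multiplication maps in the monomial basis (an illustration of ours using NumPy's polynomial helpers; the particular polynomials are arbitrary):

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Coefficient vectors in the monomial basis {1, t, t^2, ...}:
p = np.array([1.0, 2.0, 3.0])          # p(t) = 1 + 2t + 3t^2
q = np.array([0.0, -1.0, 0.0, 4.0])    # q(t) = -t + 4t^3

D = P.polyder                          # the derivative map, acting on coefficients
a, b = 2.0, -3.0

# Linearity of D: D(a p + b q) = a D(p) + b D(q)
lhs = D(P.polyadd(a * p, b * q))
rhs = P.polyadd(a * D(p), b * D(q))
print(np.allclose(lhs, rhs))           # True

# The multiplication map p(t) -> t p(t) shifts every coefficient up one power:
print(P.polymul([0.0, 1.0], p))        # [0. 1. 2. 3.]
```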

Box 2.3.6

Two linear maps $T, S \in \mathcal{L}(V, W)$ are equal, i.e. $T = S$, if and only if $T(e_i) = S(e_i)$ for all vectors $e_i$ in a basis of $V$. In other words, linear maps are uniquely determined by their action on a basis.

Theorem 2.3.7

Let $u$ and $v$ be non-zero vectors in an inner product space $V$. Any endomorphism $T \in \operatorname{End}(V)$ is equal to the zero map if and only if $\langle u, T v \rangle = 0$ for all such $u$ and $v$.


Proof.

($\Rightarrow$) If $T = 0$, then for any $u, v \in V$, we have $\langle u, T v \rangle = \langle u, 0 \rangle = 0$.

($\Leftarrow$) Conversely, suppose that $\langle u, T v \rangle = 0$ for all $u, v \in V$. Since this holds for all $u$, we can choose $u = T v$, which gives us $\langle T v, T v \rangle = 0$. The inner product of a vector with itself is zero if and only if the vector is the zero vector. Thus, we have $T v = 0$ for all $v \in V$, which means that $T = 0$.


Similarly,

Theorem 2.3.8

Any endomorphism $T \in \operatorname{End}(V)$ is equal to the zero map if and only if $\langle v, T v \rangle = 0$ for all $v \in V$.


Proof.

($\Rightarrow$) If $T = 0$, then for any $v \in V$, we have $\langle v, T v \rangle = \langle v, 0 \rangle = 0$. This part is trivial.

($\Leftarrow$) Conversely, suppose that $\langle v, T v \rangle = 0$ for all $v \in V$. Choose a vector $v = \alpha u + \beta w$ for some $u, w \in V$ and $\alpha, \beta \in \mathbb{C}$. From the definitions of linear maps and inner products, we have

$$0 = \langle \alpha u + \beta w,\; T(\alpha u + \beta w) \rangle = |\alpha|^2 \langle u, T u \rangle + |\beta|^2 \langle w, T w \rangle + \alpha^* \beta\, \langle u, T w \rangle + \beta^* \alpha\, \langle w, T u \rangle = \alpha^* \beta\, \langle u, T w \rangle + \beta^* \alpha\, \langle w, T u \rangle,$$

where we used the assumption that $\langle u, T u \rangle = 0$ and $\langle w, T w \rangle = 0$. (Prior to applying the assumption, the equation is called the polarization identity.)

Let $\alpha = \beta = 1$. Then, we have $\langle u, T w \rangle + \langle w, T u \rangle = 0$. Let $\alpha = 1$ and $\beta = i$. Then, we have $i\,\langle u, T w \rangle - i\,\langle w, T u \rangle = 0$, i.e. $\langle u, T w \rangle - \langle w, T u \rangle = 0$. Combining these two equations, we get $\langle u, T w \rangle = 0$ and $\langle w, T u \rangle = 0$ for all $u, w \in V$.

By Theorem 2.3.7, we conclude that $T = 0$.
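
The four-term polarization identity underlying this argument can be verified numerically. The sketch below (our own) assumes the convention that the inner product is antilinear in its first slot, which is what NumPy's `vdot` implements:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
T = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
u = rng.normal(size=n) + 1j * rng.normal(size=n)
w = rng.normal(size=n) + 1j * rng.normal(size=n)

# Polarization: <u, T w> = (1/4) sum_k (-i)^k <u + i^k w, T(u + i^k w)>, k = 0..3.
# So if <v, T v> = 0 for every v, then every <u, T w> vanishes, and T = 0.
lhs = np.vdot(u, T @ w)
rhs = 0.25 * sum((-1j) ** k * np.vdot(u + (1j ** k) * w, T @ (u + (1j ** k) * w))
                 for k in range(4))
print(np.allclose(lhs, rhs))   # True
```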


From what we have learned so far, we can determine if two linear maps are equal in a few ways.

  • $T = S$ if and only if $T - S = 0$.
  • $T = S$ if and only if $T(e_i) = S(e_i)$ for all vectors $e_i$ in a basis of $V$ (Box 2.3.6).
  • $T = S$ if and only if $\langle u, T v \rangle = \langle u, S v \rangle$ for all $u$ and $v$ (Theorem 2.3.7).

2.3.1 Kernels

Theorem 2.3.9

For a linear map $T: V \to W$, the kernel (or null space) of $T$, defined as

$$\ker T = \{ v \in V \mid T(v) = 0 \},$$

forms a subspace of $V$.

Intuitively, if we visualize vectors as arrows on a plane, the kernel is the set of all vectors that get squished down to the zero vector by the linear map . If the kernel is not just the zero vector, then there are directions in the domain that get completely flattened out by the map. Then, the map is not one-to-one, since multiple vectors in the domain can map to the same vector in the codomain (the zero vector). We will formalize this idea later.
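
As a small illustration of this picture (our own example), the projection of $\mathbb{R}^3$ onto the $xy$-plane squishes the $z$-axis to zero, so its kernel is the span of $(0, 0, 1)$. Numerically, a basis of the kernel can be read off from the singular vectors with vanishing singular value:

```python
import numpy as np

# Projection of R^3 onto the xy-plane: everything along z is squished to zero.
A = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 0.]])

# Right-singular vectors with (numerically) zero singular value span ker A.
_, s, Vt = np.linalg.svd(A)
kernel_basis = Vt[s < 1e-12]
print(kernel_basis)             # ~ [[0., 0., 1.]] (up to sign)
print(A @ kernel_basis.T)       # all zeros: these directions are mapped to 0
```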


Proof. We need to show that the kernel is closed under addition and scalar multiplication, and that it contains the zero vector.

First, it obviously contains the zero vector, since $T(0) = 0$ by linearity.

Next, let $u, v \in \ker T$. Then, we have $T(u) = 0$ and $T(v) = 0$.

By the linearity of $T$, we have

$$T(u + v) = T(u) + T(v) = 0 + 0 = 0.$$

Thus, $u + v \in \ker T$, so the kernel is closed under addition.

Finally, let $v \in \ker T$ and let $a \in \mathbb{F}$ be a scalar. Then, we have $T(v) = 0$. By the linearity of $T$, we have

$$T(a v) = a\,T(v) = a \cdot 0 = 0.$$

Thus, $a v \in \ker T$, so the kernel is closed under scalar multiplication. Therefore, the kernel of $T$ is a subspace of $V$.


Theorem 2.3.10

The image (or range) of a linear map $T: V \to W$, defined as

$$\operatorname{im} T = T(V) = \{ T(v) \mid v \in V \},$$

forms a subspace of $W$. The dimension of the image is called the rank of $T$, denoted by $\operatorname{rank} T$.


Proof. We need to show that the image is closed under addition and scalar multiplication, and that it contains the zero vector.

First, it obviously contains the zero vector, since .

Next, let $w_1, w_2 \in \operatorname{im} T$. Then, there exist $v_1, v_2 \in V$ such that $T(v_1) = w_1$ and $T(v_2) = w_2$. By the linearity of $T$, we have

$$w_1 + w_2 = T(v_1) + T(v_2) = T(v_1 + v_2).$$

Since $w_1 + w_2$ is the image of $v_1 + v_2$, we have $w_1 + w_2 \in \operatorname{im} T$, so the image is closed under addition.

Finally, let $w \in \operatorname{im} T$ and let $a \in \mathbb{F}$ be a scalar. Then, there exists $v \in V$ such that $T(v) = w$. By the linearity of $T$, we have

$$a w = a\,T(v) = T(a v).$$

Thus, $a w \in \operatorname{im} T$, so the image is closed under scalar multiplication. Therefore, the image of $T$ is a subspace of $W$.
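
A quick sketch of the rank (our own example): for a matrix map, the image is the span of the columns, and its dimension is what `numpy.linalg.matrix_rank` computes:

```python
import numpy as np

# Every column is a multiple of (1, 2, 3), so the image of v -> A v is the
# one-dimensional subspace spanned by that vector.
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [3., 6., 9.]])

print(np.linalg.matrix_rank(A))   # 1 = dim(im A) = rank of the map
```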


Theorem 2.3.11

A linear map $T: V \to W$ is injective (or one-to-one) if and only if $\ker T = \{0\}$.

If the kernel contains only the zero vector, it is known as a trivial kernel. Injectivity means that different vectors in the domain map to different vectors in the codomain.


Proof.

($\Rightarrow$) Suppose that $T$ is injective. Then, for any $v \in \ker T$, we have $T(v) = 0 = T(0)$.

Since $T$ is injective, the only vector that maps to $0$ is $0$ itself. Therefore, we must have $v = 0$, which shows that $\ker T = \{0\}$.

($\Leftarrow$) Conversely, suppose that $\ker T = \{0\}$. We need to show that $T$ is injective.

Let $u, v \in V$ be such that $T(u) = T(v)$. Then, we have

$$T(u - v) = T(u) - T(v) = 0.$$

Since $\ker T = \{0\}$, it follows that $u - v = 0$, or $u = v$. Thus, $T$ is injective.


Theorem 2.3.12

All linear, isometric maps are injective.


Proof. Suppose that $T: V \to W$ is isometric and linear. Let $v \in \ker T$. Then, we have $T(v) = 0$.

By the isometric property of $T$, we have

$$\langle v, v \rangle = \langle T v, T v \rangle = \langle 0, 0 \rangle = 0.$$

Since the inner product of a vector with itself is zero if and only if the vector is the zero vector, we have $v = 0$. Therefore, any vector in the kernel must be the zero vector, which means that $\ker T = \{0\}$. By Theorem 2.3.11, we conclude that $T$ is injective.


Lastly,

Theorem 2.3.13

For a linear map $T: V \to W$ with $\dim V = N < \infty$, we have

$$\dim V = \dim(\ker T) + \operatorname{rank} T,$$

assuming that both vector spaces are over the same field $\mathbb{F}$. (The dimension of the kernel is also called the nullity of $T$.)


Proof. Let $\{e_1, \ldots, e_m\}$ be a basis for $\ker T$, where $m = \dim(\ker T)$. We can extend this basis to a basis for $V$ by adding vectors $f_1, \ldots, f_k$, where $k = N - m$. Thus, we have a basis for $V$ given by $\{e_1, \ldots, e_m, f_1, \ldots, f_k\}$.

The vectors $f_1, \ldots, f_k$ are linearly independent, and no non-trivial linear combination of them lies in the kernel of $T$ (such a combination would be expressible in terms of the $e_i$, contradicting the linear independence of the basis). Therefore, their images under $T$, i.e. $T(f_1), \ldots, T(f_k)$, are also linearly independent in $W$: if $\sum_j c_j T(f_j) = 0$, then $\sum_j c_j f_j \in \ker T$, forcing $c_j = 0$ for all $j$. These images also span the image of $T$, since any $v \in V$ can be written as $v = \sum_i a_i e_i + \sum_j b_j f_j$, so that $T(v) = \sum_j b_j T(f_j)$. Thus, they form a basis for $\operatorname{im} T$.

The number of vectors in this basis is $k = N - m$, which is the dimension of the image of $T$, i.e. $\operatorname{rank} T = N - m$. Therefore, we have

$$\dim V = N = m + k = \dim(\ker T) + \operatorname{rank} T.$$

This completes the proof.
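
The dimension theorem is easy to sanity-check numerically. In this sketch (our own; the matrix is random and SciPy is assumed to be available), the rank and the dimension of the kernel are computed independently and compared with $\dim V$:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(7)
N = 5                                   # dim V
A = rng.normal(size=(3, N))             # a generic map from R^5 to R^3

rank = np.linalg.matrix_rank(A)         # dim(im A)
nullity = null_space(A).shape[1]        # dim(ker A), via an orthonormal kernel basis

print(rank, nullity, rank + nullity == N)   # typically: 3 2 True
```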


Corollary 2.3.14

In a finite-dimensional vector space, an endomorphism is bijective if it is either injective or surjective.


Proof. Let $T$ be an endomorphism on a finite-dimensional vector space $V$.

If $T$ is injective, then by Theorem 2.3.11, we have $\ker T = \{0\}$, which means that $\dim(\ker T) = 0$. By Theorem 2.3.13, we have

$$\operatorname{rank} T = \dim V - \dim(\ker T) = \dim V.$$

Since the rank of $T$ is equal to the dimension of $V$, the image of $T$ is the entire space $V$, which means that $T$ is surjective.

If $T$ is surjective, then the image of $T$ is the entire space $V$, which means that $\operatorname{rank} T = \dim V$. By Theorem 2.3.13, we have

$$\dim(\ker T) = \dim V - \operatorname{rank} T = 0.$$

This means that $\ker T = \{0\}$, so $T$ is injective by Theorem 2.3.11. Therefore, in a finite-dimensional vector space, an endomorphism is bijective if it is either injective or surjective.


We will skip Example 2.3.15; see Hassani for details.

2.3.2 Linear Isomorphisms

We have alluded to the idea of isomorphisms in previous sections. In the context of vector spaces, two vector spaces can look (be notationally) different but still be fundamentally identical in structure. For instance, the vector space of polynomials of degree at most $n$ and the vector space of $(n+1)$-tuples of real numbers are isomorphic, as they both have dimension $n + 1$ and share the same vector space properties. In quantum mechanics, we will see that the real algebra spanned by the identity and the Pauli matrices (multiplied by $i$) is isomorphic to the quaternions.
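
As a sketch of this identification (our own), a polynomial of degree at most $2$ can be encoded by its coefficient $3$-tuple, and addition and scalar multiplication of polynomials correspond exactly to the same operations on tuples:

```python
import numpy as np
from numpy.polynomial import Polynomial

# p(t) = 1 - 2t + 3t^2 and q(t) = 4 + t^2, encoded by coefficient 3-tuples.
p, q = Polynomial([1., -2., 3.]), Polynomial([4., 0., 1.])
phi = lambda poly: poly.coef            # the coordinate map: polynomial -> R^3

# phi respects both vector-space operations, so it is a linear isomorphism.
print(np.allclose(phi(p + q), phi(p) + phi(q)))   # True
print(np.allclose(phi(2.0 * p), 2.0 * phi(p)))    # True
```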

Let $V$ and $W$ be vector spaces over the same field $\mathbb{F}$. $V$ and $W$ are said to be linearly isomorphic, i.e., $V \cong W$, if there exists a bijective linear map $T: V \to W$.

If $W = V$, then such a $T$ is called an automorphism on $V$. An automorphism is an invertible linear map on $V$. The set of all automorphisms on $V$ is denoted by $\operatorname{Aut}(V)$ or $GL(V)$. The "GL" stands for "general linear", where the term "linear" refers to linear maps, and "general" indicates that these maps are not restricted to any special properties (like being orthogonal or unitary).

A linear isometry $T$ on a finite-dimensional inner product space $V$ is an automorphism, i.e., $T \in GL(V)$.


Proof. By Theorem 2.3.12, we know that $T$ is injective. Since $V$ is finite-dimensional, by Corollary 2.3.14, $T$ is also surjective. Therefore, $T$ is bijective, which means that $T \in GL(V)$.


A surjective linear map $T: V \to W$ is an isomorphism if and only if $\dim(\ker T) = 0$.


Proof.

($\Rightarrow$) Suppose that $T$ is an isomorphism. Then, $T$ is bijective, which means that it is injective. By Theorem 2.3.11, we have $\ker T = \{0\}$, which means that $\dim(\ker T) = 0$.

($\Leftarrow$) Conversely, suppose that $\dim(\ker T) = 0$. Then, $\ker T = \{0\}$, which means that $T$ is injective by Theorem 2.3.11. Since $T$ is also surjective by assumption, it follows that $T$ is bijective. Therefore, $T$ is an isomorphism.


Theorem 2.3.19

If $T: V \to W$ is an injective linear map, and $\{v_1, \ldots, v_n\} \subset V$ is linearly independent, then $\{T(v_1), \ldots, T(v_n)\}$ is also linearly independent.


Proof. Suppose that $\{v_1, \ldots, v_n\}$ is linearly independent. We need to show that $\{T(v_1), \ldots, T(v_n)\}$ is also linearly independent.

From the fact that $\{v_1, \ldots, v_n\}$ is linearly independent, we have

$$\sum_{i=1}^{n} c_i v_i = 0 \quad \Longrightarrow \quad c_1 = c_2 = \cdots = c_n = 0.$$

Now, consider a vanishing linear combination of the images under $T$:

$$0 = \sum_{i=1}^{n} c_i\, T(v_i) = T\!\left( \sum_{i=1}^{n} c_i v_i \right).$$

Since $T$ is injective, the only way for $T\big(\sum_i c_i v_i\big)$ to equal $0$ is if $\sum_i c_i v_i = 0$. By the linear independence of $\{v_1, \ldots, v_n\}$, this implies that $c_i = 0$ for all $i$. Therefore, $\{T(v_1), \ldots, T(v_n)\}$ is also linearly independent.


Theorem 2.3.20

If two finite-dimensional vector spaces $V$ and $W$ are isomorphic, i.e., $V \cong W$, then they have the same dimension, i.e., $\dim V = \dim W$.


Proof. Suppose that $T: V \to W$ is an isomorphism. Then, $T$ is bijective, which means that it is both injective and surjective.

Since $T$ is injective, by Theorem 2.3.19, the image of any basis of $V$ under $T$ is a linearly independent set in $W$. Let $\{e_1, \ldots, e_N\}$ be a basis for $V$, where $N = \dim V$. Then, the set $\{T(e_1), \ldots, T(e_N)\}$ is linearly independent in $W$. Since $T$ is surjective, every $w \in W$ can be written as $w = T(v) = T\big(\sum_i a_i e_i\big) = \sum_i a_i T(e_i)$, so this set also spans $W$. Therefore, the set $\{T(e_1), \ldots, T(e_N)\}$ forms a basis for $W$. Thus, we have $\dim W = N = \dim V$. Therefore, if two finite-dimensional vector spaces are isomorphic, they have the same dimension.


This means that for any $N$-dimensional vector space $V$ over $\mathbb{R}$, we can find an isomorphism to $\mathbb{R}^N$, and likewise for $\mathbb{C}$ and $\mathbb{C}^N$.

Proposition 2.3.21

Let and be subspaces of such that . Let be an automorphism on . If it leaves one of the summands invariant, i.e., , then it also leaves the other summand invariant, i.e., .


Proof. We can prove this in one line,


Let $T: V \to W$ be a linear map with $\dim V < \infty$. Define a linear map $\bar{T}: V/\ker T \to \operatorname{im} T$ as follows. The domain is the quotient space $V/\ker T$, whose elements are equivalence classes $[v]$ based on the relation $u \sim v$ if and only if $u - v \in \ker T$. In other words, $[v] = v + \ker T$.

If $[v]$ is represented by any vector $v \in V$, then we define $\bar{T}$ to act as

$$\bar{T}([v]) = T(v).$$

We need to show that $\bar{T}$ is well-defined, i.e., if $[u] = [v]$, then $\bar{T}([u]) = \bar{T}([v])$. If $[u] = [v]$, then $u \sim v$, which means that $u - v \in \ker T$. Denote the difference as $k = u - v \in \ker T$.

Then

$$\bar{T}([u]) = T(u) = T(v + k) = T(v) + T(k) = T(v) = \bar{T}([v]),$$

where we used the linearity of $T$ and the fact that $T(k) = 0$. Next, we show that $\bar{T}$ is linear. Let $[u], [v] \in V/\ker T$ and $a, b \in \mathbb{F}$. Then,

$$\bar{T}(a[u] + b[v]) = \bar{T}([a u + b v]) = T(a u + b v) = a\,T(u) + b\,T(v) = a\,\bar{T}([u]) + b\,\bar{T}([v]).$$
Thus, $\bar{T}$ is linear. Finally, we show that $\bar{T}$ is bijective. To show injectivity, suppose that $\bar{T}([u]) = \bar{T}([v])$. Then, we have $T(u) = T(v)$. By the linearity of $T$, we have $T(u - v) = 0$, which means that $u - v \in \ker T$. Thus, $u \sim v$, which implies that $[u] = [v]$.

To show surjectivity, let $w \in \operatorname{im} T$. Then, there exists $v \in V$ such that $T(v) = w$. Thus, we have $\bar{T}([v]) = T(v) = w$. Therefore, $\bar{T}$ is bijective.

By Theorem 2.3.20, since $\bar{T}: V/\ker T \to \operatorname{im} T$ is an isomorphism, we have

$$\dim(V/\ker T) = \dim(\operatorname{im} T) = \operatorname{rank} T.$$

By the dimension formula for quotient spaces, we have

$$\dim(V/\ker T) = \dim V - \dim(\ker T).$$

Combining these two equations, we get

$$\dim V = \dim(\ker T) + \operatorname{rank} T,$$

which is the dimension theorem.

Generally, we have

Theorem 2.3.23

Suppose is a linear map between finite-dimensional vector spaces. Let be a subspace of . We can define a linear map by , where is the equivalence class of in the quotient space . Then, is well-defined and linearly isomorphic.

Lastly, consider the linear map $\Phi: V \otimes (W_1 \oplus W_2) \to (V \otimes W_1) \oplus (V \otimes W_2)$ defined by

$$\Phi\big( v \otimes (w_1, w_2) \big) = (v \otimes w_1,\; v \otimes w_2)$$

for all $v \in V$, $w_1 \in W_1$, and $w_2 \in W_2$. By the universal property of tensor products and direct sums, $\Phi$ is well-defined and linear. It is also bijective, with the inverse map given by

$$\Phi^{-1}\big( (u \otimes w_1,\; v \otimes w_2) \big) = u \otimes (w_1, 0) + v \otimes (0, w_2).$$

Thus, $\Phi$ is a linear isomorphism, meaning

$$V \otimes (W_1 \oplus W_2) \cong (V \otimes W_1) \oplus (V \otimes W_2).$$
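
Assuming the isomorphism above is indeed the distributivity of the tensor product over the direct sum, a quick dimension count is consistent with it. In the sketch below (our own), Kronecker products of coordinate vectors stand in for the abstract tensor product:

```python
import numpy as np

dimV, dimW1, dimW2 = 2, 3, 4

# Dimensions agree: dim V * (dim W1 + dim W2) = dim V * dim W1 + dim V * dim W2.
print(dimV * (dimW1 + dimW2) == dimV * dimW1 + dimV * dimW2)   # True

# Concretely, np.kron(v, w) plays the role of v (x) w on coordinate vectors.
v  = np.arange(1, dimV + 1, dtype=float)
w1 = np.ones(dimW1)
w2 = np.ones(dimW2)
print(np.kron(v, np.concatenate([w1, w2])).size)   # 14: dim of V (x) (W1 + W2)
print(np.kron(v, w1).size + np.kron(v, w2).size)   # 14: dim of (V (x) W1) + (V (x) W2)
```
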

Summary and Next Steps

In this section, we explored various properties of linear maps between vector spaces, including kernels, images, injectivity, and isomorphisms. We established key theorems that connect these concepts, such as the dimension theorem and the characterization of injective maps.

Here are the key points to remember:

  • Definition 2.3.2: A linear map is a function between vector spaces that preserves vector addition and scalar multiplication.
  • Theorem 2.3.9: The kernel of a linear map is a subspace of the domain.
  • Theorem 2.3.10: The image of a linear map is a subspace of the codomain, and its dimension is called the rank.
  • Theorem 2.3.11: A linear map is injective if and only if its kernel is trivial (contains only the zero vector).
  • Theorem 2.3.13: The dimension theorem relates the rank and nullity of a linear map to the dimension of the domain.
  • Theorem 2.3.20: Isomorphic vector spaces have the same dimension.

With these concepts in mind, we will now take a closer look at complex vector spaces and inner product spaces in the next section, which are fundamental in quantum mechanics.